Batch Policy Optimization
Surrogate Objectives for Batch Policy Optimization in One-step Decision Making
Chen, Minmin, Gummadi, Ramki, Harris, Chris, Schuurmans, Dale
We investigate batch policy optimization for cost-sensitive classification and contextual bandits---two related tasks that obviate exploration but require generalizing from observed rewards to action selections in unseen contexts. When rewards are fully observed, we show that the expected reward objective exhibits suboptimal plateaus and exponentially many local optima in the worst case. To overcome the poor landscape, we develop a convex surrogate that is calibrated with respect to entropy regularized expected reward. We then consider the partially observed case, where rewards are recorded for only a subset of actions. Here we generalize the surrogate to partially observed data, and uncover novel objectives for batch contextual bandit training. We find that surrogate objectives remain provably sound in this setting and empirically demonstrate state-of-the-art performance.
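To make the entropy-regularized expected reward mentioned in the abstract concrete, here is a minimal sketch (the rewards, temperature, and helper names are illustrative assumptions, not taken from the paper): for a single context, the regularized objective over the action simplex is strictly concave and is maximized in closed form by a softmax policy.

```python
import numpy as np

# A minimal sketch (assumed notation, not the paper's exact construction):
# for a single context with per-action rewards r, the entropy-regularized
# expected reward
#     J_tau(pi) = <pi, r> + tau * H(pi)
# is strictly concave on the simplex, so it has a unique maximizer with a
# closed form:
#     pi*(a) = softmax(r / tau),   max value = tau * logsumexp(r / tau).

def softmax(x):
    z = x - np.max(x)               # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def entropy_regularized_value(pi, r, tau):
    """J_tau(pi) = <pi, r> + tau * H(pi), with Shannon entropy H."""
    pi = np.asarray(pi, dtype=float)
    entropy = -np.sum(pi * np.log(np.clip(pi, 1e-12, None)))
    return float(pi @ r + tau * entropy)

r = np.array([1.0, 0.2, -0.5])      # hypothetical per-action rewards
tau = 0.5                           # entropy-regularization temperature

pi_star = softmax(r / tau)
m = np.max(r / tau)
closed_form = tau * (m + np.log(np.sum(np.exp(r / tau - m))))  # tau * logsumexp(r / tau)

print(pi_star)                                     # unique optimal policy
print(entropy_regularized_value(pi_star, r, tau))  # matches closed_form
print(closed_form)
```

By contrast, the unregularized expected reward composed with a parameterized policy class is the objective the paper shows can plateau and harbor exponentially many local optima in the worst case.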
Reviews: Surrogate Objectives for Batch Policy Optimization in One-step Decision Making
Summary: The main points of the paper are: (1) the expected reward objective has exponentially many local maxima; (2) the smooth risk, and hence the new loss L(q, r, x), which are both calibrated, can be used instead, and L is strongly convex, implying a unique global optimum.

Originality: The work is original.

Clarity: The paper is clear to read, except for some details in the experimental section on page 4, where the meaning of the risk R(\pi) is not described clearly.

Significance and comments: First, regarding the new objective for contextual bandits, the authors mention that this objective is not the same as the trust-region or proximal objectives used in RL (line 237), but how does it compare with the maximum-entropy RL objectives (for example, Haarnoja et al., Soft Q-Learning and Soft Actor-Critic) with the same policy and value function/reward models? In these maxent RL formulations, an estimator similar to Eqn 12 (page 5) is optimized.
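For reference, a sketch of the maximum-entropy RL objective the reviewer alludes to; the notation below is the standard one from Haarnoja et al.'s Soft Q-Learning and Soft Actor-Critic papers, not from the paper under review.

```latex
% Sketch of the maximum-entropy RL objective the reviewer refers to
% (Haarnoja et al., Soft Q-Learning / Soft Actor-Critic). Notation is the
% standard one from those papers, not from the paper under review.
\begin{align*}
  J(\pi) &= \sum_{t} \mathbb{E}_{(s_t, a_t) \sim \rho_\pi}
            \big[\, r(s_t, a_t) + \alpha \, \mathcal{H}\big(\pi(\cdot \mid s_t)\big) \,\big], \\
  V_{\mathrm{soft}}(s) &= \alpha \log \sum_{a} \exp\!\big( Q_{\mathrm{soft}}(s, a) / \alpha \big).
\end{align*}
```

With a one-step horizon (the contextual bandit case), J(\pi) collapses to the entropy-regularized expected reward \mathbb{E}_x[\sum_a \pi(a \mid x) r(x, a) + \alpha \mathcal{H}(\pi(\cdot \mid x))], which appears to be the connection the reviewer is probing.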
Model Selection in Batch Policy Optimization
Lee, Jonathan N., Tucker, George, Nachum, Ofir, Dai, Bo
We study the problem of model selection in batch policy optimization: given a fixed, partial-feedback dataset and $M$ model classes, learn a policy whose performance is competitive with the policy derived from the best model class. We formalize the problem in the contextual bandit setting with linear model classes by identifying three sources of error that any model selection algorithm must optimally trade off in order to be competitive: (1) approximation error, (2) statistical complexity, and (3) coverage. The first two sources are common in model selection for supervised learning, where optimally trading off these properties is well studied. In contrast, the third source is unique to batch policy optimization and is due to the dataset shift inherent to the setting. We first show that no batch policy optimization algorithm can achieve a guarantee addressing all three simultaneously, revealing a stark contrast between the difficulties of batch policy optimization and the positive results available in supervised learning. Despite this negative result, we show that relaxing any one of the three error sources enables the design of algorithms achieving near-oracle inequalities for the remaining two. We conclude with experiments demonstrating the efficacy of these algorithms.
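As a schematic of the trade-off the abstract describes (an assumed illustrative form, not the paper's actual theorem or notation), a near-oracle inequality for batch policy optimization would bound the suboptimality of the selected policy by the best achievable combination of the three error sources:

```latex
% Illustrative shape of a near-oracle inequality over model classes
% m = 1, ..., M (assumed form; the paper's bounds and notation differ):
\[
  V(\pi^\star) - V(\hat{\pi}) \;\lesssim\;
  \min_{m \in [M]} \bigg\{
    \underbrace{\mathrm{approx}(m)}_{\text{approximation error}}
    \;+\;
    \underbrace{C_{\mathrm{cov}}(m)}_{\text{coverage}}
    \cdot
    \underbrace{\sqrt{\mathrm{comp}(m)/n}}_{\text{statistical complexity}}
  \bigg\}.
\]
```

The paper's impossibility result says no algorithm can achieve a guarantee of this form with all three terms controlled simultaneously; relaxing any one of them recovers a near-oracle inequality for the remaining two.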